ARTIFICIAL INTELLIGENCE (AI), the branch of computer science
concerned with making machines behave intelligently. Ultimately, AI
researchers hope to understand intelligence well enough to make
computer programs with human-level or higher general intelligence.
Programs now exist that play chess and other games at expert level,
recognize connected speech using a limited vocabulary, answer
questions about stories, prove theorems in certain branches of
mathematics, and recognize objects in scenes using a television
camera. However, these programs have limited ability, no creativity,
and each works in its own limited domain.

AI research has two aspects. The theoretical part involves
discovering what intellectual mechanisms exist and how they interact
in achieving goals. The experimental part consists of writing
computer programs or building machines to solve particular problems
or to behave intelligently in particular kinds of situations.
Sometimes the experimental programs have practical goals, but more
often their purpose is to verify that certain methods will achieve
certain goals.

Intelligent machines are old in mythology, science fiction and
swindling. In the early 1900s, the Spanish inventor Torres y Quevedo
made a machine that could checkmate its opponent with a rook and king
against a king, and speculated intelligently about possible
intelligent machines. However, systematic work on AI began only after
the invention of digital computers. The first serious scientific
article on AI was written by the British mathematician Alan Turing in
1950, and the first full-time research group was founded in 1954 at
Carnegie-Mellon University by Allen Newell and Herbert Simon.

The main areas of AI research, and some of the techniques used, are
described in what follows.

SEARCH. When a computer must act, it usually has several choices.
Each choice may lead to a number of different consequences, depending
on nature or on the actions of an opponent in a game, and each
consequence may result in new possible actions, and so on.

The main problem in searching through the "tree of possibilities" is
the so-called "combinatorial explosion". If there are 10
possibilities at each level of search, then 10 levels of search lead
to 10 billion possibilities, which is infeasible even with fast
computers. Therefore, the programmer must devise heuristics that
allow the program to reject most of the alternatives, even at the
risk of sometimes missing the best answer. Thus when there isn't time
to evaluate each line of play to the end, the program must decide
when it can stop searching and approximately evaluate the position.

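The idea of cutting off the search at a limited depth and applying an
approximate evaluation can be sketched in a few lines of present-day
code. The Python fragment below is only an illustration of the
principle, using a toy game tree rather than any real chess program;
the tree, the depth limit, and the evaluation are invented for the
example.

    # Depth-limited minimax search on a toy game tree.  Internal nodes
    # are lists of child positions; leaves carry an approximate numeric
    # evaluation.  When the depth limit is reached, the program stops
    # searching and uses the approximate value.

    def minimax(node, depth, maximizing):
        if depth == 0 or not isinstance(node, list):
            return evaluate(node)
        values = [minimax(child, depth - 1, not maximizing) for child in node]
        return max(values) if maximizing else min(values)

    def evaluate(node):
        # A leaf already carries its approximate score; an unexpanded
        # subtree cut off by the depth limit is scored neutrally here.
        return node if not isinstance(node, list) else 0

    # Example: three moves, each with two replies, evaluated at depth 2.
    tree = [[3, 5], [2, 9], [0, 7]]
    print(minimax(tree, 2, True))   # prints 3: the best of the worst replies
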
The performance of computer chess programs is a good measure of
progress in this part of AI. The chess programs of the 1950's played
much worse than the people who wrote them, because the programmers,
some of whom were master-level players, didn't understand their own
mental processes well enough to discover the methods they themselves
used to limit search. Since then, enough of the ways in which humans
limit search have been discovered so that in 1977 a program written
by David Slate and Lawrence Atkin won the Minnesota Open Tournament
with a master-level performance. (It lost the subsequent tournament
for the state championship.) However, this performance depended on a
very fast computer looking at hundreds of thousands of positions, so
it is clear that many of the human ways of limiting search remain to
be understood.

For example, one early program written by a chess master looked at
the seven most "plausible" moves and the seven plausible replies to
each of these and seven replies to each of these and seven to each of
these, for a total of 7 × 7 × 7 × 7 = 2401 final positions. Most of
these positions were examined unnecessarily. When a move has been
shown to be worse than a previously examined move of the same player
by finding one reply that refutes it, there is no need to look for
further refutations. This observation (illustrated in the figure) has
been elaborated into a method called the alpha-beta heuristic for
reducing search, and it is used by all modern game playing programs.
The joke is that it is also used by all human players, modern or not,
and whether they realize it or not.

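The alpha-beta idea can be stated compactly in code. The sketch
below, again in Python on a toy tree rather than in any actual chess
program, extends the depth-limited search above: as soon as one reply
refutes a move, the remaining replies to that move are skipped.

    # Alpha-beta search.  `alpha` is the best value the maximizing
    # player can already guarantee, `beta` the best for the minimizing
    # player.  When a move's value falls outside this window it is
    # refuted, and its remaining replies need not be examined.

    def alphabeta(node, depth, alpha, beta, maximizing):
        if depth == 0 or not isinstance(node, list):
            return node if not isinstance(node, list) else 0
        if maximizing:
            value = float("-inf")
            for child in node:
                value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
                alpha = max(alpha, value)
                if alpha >= beta:
                    break               # this branch cannot affect the final choice
            return value
        else:
            value = float("inf")
            for child in node:
                value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
                beta = min(beta, value)
                if beta <= alpha:
                    break               # refuted: skip the remaining replies
            return value

    tree = [[3, 5], [2, 9], [0, 7]]
    print(alphabeta(tree, 2, float("-inf"), float("inf"), True))   # prints 3
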
GOALS AND SUBGOALS. Attaining a goal often requires finding a
sequence of actions on the basis of information about the effects of,
and about the preconditions for, successful performance of certain
actions. A simple example is building a tower of blocks, where a
precondition for placing one block on another is that both the block
to be moved and the place where it goes should be clear. Thus moving
one block may require moving other blocks first. Gerald Sussman's
program for designing electronic circuits required more elaborate
treatment of how doing one action affects the preconditions for
others.

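A minimal sketch of the goal-subgoal idea for the blocks example,
written in present-day Python rather than in any planning system of
the period: to place one block on another, the program first sets up
the subgoals of clearing both blocks, which may in turn require
moving other blocks. The state representation and names are invented
for the illustration.

    # Goal-subgoal sketch for stacking blocks.  `on[x]` records what
    # block x rests on ("table" if nothing); a block is clear when no
    # other block rests on it.

    def clear(block, on):
        for other, support in list(on.items()):
            if support == block:         # something is on top of `block`
                clear(other, on)         # subgoal: clear that block first
                on[other] = "table"      # then move it out of the way
                print("move", other, "to the table")

    def put_on(a, b, on):
        clear(a, on)                     # precondition: a is clear
        clear(b, on)                     # precondition: b is clear
        on[a] = b
        print("put", a, "on", b)

    # Initial state: c sits on a; a and b are on the table.
    state = {"a": "table", "b": "table", "c": "a"}
    put_on("a", "b", state)              # moves c to the table, then stacks a on b
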
KNOWLEDGE REPRESENTATION. Many of the difficulties in making machines
perform tasks turn on the question of deciding what information the
program should have, how further conclusions can be drawn from
initial information, and how the information should be stored in the
computer. These difficulties have led to research on the question of
what knowledge is. Many of the questions asked are those studied by
philosophers under the name of epistemology - the theory of
knowledge. Mathematical logic has provided powerful means of
representing facts in computers and powerful modes of reasoning.
However, it has turned out that not all modes of reasoning performed
by humans and needed for problem solving are represented in present
systems of mathematical logic. Logic is excellent for those methods
of reasoning that never lead to error from true premises, but
intelligence also requires methods of reasoning that generate
conjectures that are sometimes false. The work of John McCarthy at
Stanford University has shown the importance and difficulty of
representing facts about situations in which new actions may start
while others are in progress. Knowledge about knowledge has also been
studied. A computer program that can plan a trip must know that it
cannot find a flight to New York by thinking about it, but that
travel agents know about airplane flights, and that their telephone
numbers are in the Yellow Pages.

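As a small illustration of representing facts in a computer and
drawing further conclusions from them, the toy Python sketch below
stores a few assertions and applies a single rule repeatedly until
nothing new follows. It is only a cartoon of the idea; the facts and
the rule are invented, and real knowledge representation systems are
far richer.

    # Facts are stored as tuples.  The one rule says: if someone knows
    # about a topic and is listed in the Yellow Pages, then the topic
    # can be found out by phoning that someone.  Applying rules until
    # no new facts appear is the simplest form of mechanical reasoning.

    facts = {
        ("knows_about", "travel agent", "flights"),
        ("listed_in_yellow_pages", "travel agent"),
    }

    def apply_rule(facts):
        derived = set()
        for fact in facts:
            if fact[0] == "knows_about":
                _, who, topic = fact
                if ("listed_in_yellow_pages", who) in facts:
                    derived.add(("can_find_out_by_phoning", who, topic))
        return derived

    while True:                          # forward chaining to a fixed point
        new = apply_rule(facts) - facts
        if not new:
            break
        facts |= new

    print(("can_find_out_by_phoning", "travel agent", "flights") in facts)  # True
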
AI and the Everyday World

Besides dealing with purely symbolic problems such as chess and
mathematics, we want intelligent machines that can see, hear, control
motion, and make things.

PATTERN MATCHING. Programs have been written that find objects by
tracing outlines of color or brightness changes, or by growing
regions of a single color or texture. Such tasks require a computer
to store patterns and to recognize objects by matching patterns. A
computer might store its idea of a dog as a collection of legs, tail,
etc., each of a given shape and connected in the right way. It can
try to find dogs in a picture by matching the dog pattern with parts
of the picture. The program must compute how a dog will appear with
its legs in some position when looked at from some angle and partly
hidden by other objects. Actually, in order to recognize dogs it must
do this process backwards, deciding what kind of dog would lead to
the appearance it sees.

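A crude sketch of matching a stored pattern against the parts found
in a picture might look as follows in Python; the part names and the
picture description are invented, and a real vision program must also
deal with shape, viewing angle, and occlusion as described above.

    # A pattern lists the required parts and the required connections
    # between them.  A candidate object from the picture matches if it
    # supplies all of the parts and all of the connections.

    dog_pattern = {
        "parts": {"head", "body", "tail", "leg"},
        "connections": {("head", "body"), ("tail", "body"), ("leg", "body")},
    }

    def matches(pattern, candidate):
        return (pattern["parts"] <= candidate["parts"] and
                pattern["connections"] <= candidate["connections"])

    # A hypothetical object segmented out of a picture.
    picture_object = {
        "parts": {"head", "body", "tail", "leg", "collar"},
        "connections": {("head", "body"), ("tail", "body"), ("leg", "body")},
    }
    print(matches(dog_pattern, picture_object))   # True: the dog pattern fits
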
General principles of pattern description and techniques for pattern
matching apply to recognizing dogs, to recognizing chess
configurations, and to many other problems.

Patterns of action called frames have been studied by Marvin L.
Minsky and his students at Massachusetts Institute of Technology. A
typical frame is the event of visiting a restaurant; subframes would
consist of being seated, ordering, waiting, eating, paying the bill,
and leaving. Such frames have been used by Roger C. Schank at Yale
University in programs that answer questions about stories, and also
fill in information omitted from a story because it is implicit in
the frame.

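A frame can be represented as a data structure whose slots have
default values, so that a program can fill in what a story leaves
unsaid. The Python sketch below is a toy version of the idea, not the
actual programs mentioned above; its slots and defaults are invented.

    # A restaurant frame: the expected subevents plus default values
    # for slots a story often omits.  Explicit facts from the story
    # override the defaults; missing facts are answered from them.

    restaurant_frame = {
        "subevents": ["be seated", "order", "wait", "eat", "pay the bill", "leave"],
        "defaults": {"payer": "the customer", "food comes from": "the kitchen"},
    }

    def fill_in(story_facts, frame):
        filled = dict(frame["defaults"])
        filled.update(story_facts)       # what the story says takes precedence
        return filled

    story = {"customer": "John", "food": "a hamburger"}
    print(fill_in(story, restaurant_frame)["payer"])
    # prints "the customer": the story never said who paid, so the
    # frame's default answers the question.
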
UNDERSTANDING LANGUAGE. Programs that recognize and produce human
speech have been written. Some such programs recognize hundreds of
isolated words; some can handle connected speech if it is not too
difficult.

One idea for making an intelligent machine is to first make it
capable of understanding English, and then to let it read textbooks,
encyclopedias, and scientific articles. There are computer programs
that can read stories from first grade readers and answer some
questions about them. Other programs can converse with physicians
about bacterial diseases of the blood. All present programs, however,
are quite limited in the subject matter they can understand. The
difficulty is not one of vocabulary - whole dictionaries are
available on tape - but rather that understanding encyclopedia
articles requires more initial knowledge and ability to reason than
can now be programmed.

In the 1950's programs were written to translate from one language to
another. They weren't very good, and it was some years before
everyone was convinced that successful translation requires programs
that "understand" the material being translated. Efforts are now
being made to give computers a better and better understanding of
larger and larger fragments of natural language. This "understanding"
is tested by the performance of question-answering programs that
answer questions about a text on the basis of information in the text
and the common sense knowledge possessed by the program.

One program combined English dialogue and problem solving to answer
questions and perform requested actions in a simulated "blocks
world". Devised by Terry Winograd, SHRDLU could be requested, "Pick
up the red pyramid and put it on the green block." SHRDLU would
figure out that "it" referred to the red pyramid. It would clear off
the green block if necessary, and would ask, "Which green block?" if
there was more than one.

LEARNING FROM EXPERIENCE. Humans and animals learn from experience,
and machines should do likewise. An early example was a program by
Arthur Samuel for playing checkers. The behavior of the program was
governed by certain numbers that determined how it evaluated
positions; for example, the relative values of a king and a single
man. The program would read books of games played by master players
and adjust the numbers until they predicted as many as possible of
the moves regarded as good by the experts. Combined with search, this
makes an excellent program. A version of it, without the learning, is
sold for playing through a television set.

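The kind of learning described can be sketched as adjusting the
weights of an evaluation function until it ranks the master's move
above the alternatives. The Python fragment below is a schematic
modern rendering of that idea, not Samuel's program; the two features
and the book positions are invented.

    # Each move leads to a position described by numeric features, here
    # (men gained, kings gained); the evaluation is a weighted sum.
    # Whenever the program rates some alternative at least as highly as
    # the master's book move, the weights are nudged toward the book move.

    def score(features, weights):
        return sum(w * f for w, f in zip(weights, features))

    def adjust(weights, book_move, alternatives, rate=0.1):
        for other in alternatives:
            if score(other, weights) >= score(book_move, weights):
                weights = [w + rate * (b - o)
                           for w, b, o in zip(weights, book_move, other)]
        return weights

    weights = [1.0, 1.0]                 # value of a man and of a king, initially equal
    book_move = [0, 1]                   # the master's move gains a king
    alternatives = [[1, 0]]              # an alternative merely gains a man
    for _ in range(20):                  # repeated adjustment, as over a book of games
        weights = adjust(weights, book_move, alternatives)
    print(weights)                       # the king's weight now exceeds the man's
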
Patrick Winston of Massachusetts Institute of Technology wrote a
program that learns to classify objects according to the presence of
subobjects related in a specified way. In Winston's program, an arch
is recognized as consisting of three objects, one of which is
supported by the other two, which are not touching each other. An
arcade is learned as a line of arches arranged so that there is a
path under all of them.

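Such structural descriptions can be caricatured as a test over a
scene's objects and relations. The Python sketch below shows only the
finished arch description applied to a scene, not the learning that
produces it; the scene and the relation names are invented.

    # A scene is a set of objects plus "supports" and "touches"
    # relations.  The arch test: some object is supported by exactly
    # two others, and those two do not touch each other.

    def is_arch(scene):
        objects = scene["objects"]
        supports = scene["supports"]
        touches = scene["touches"]
        for top in objects:
            legs = [o for o in objects if o != top and (o, top) in supports]
            if (len(legs) == 2
                    and (legs[0], legs[1]) not in touches
                    and (legs[1], legs[0]) not in touches):
                return True
        return False

    scene = {
        "objects": {"lintel", "left post", "right post"},
        "supports": {("left post", "lintel"), ("right post", "lintel")},
        "touches": set(),
    }
    print(is_arch(scene))   # True
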
To teach someone to play tic-tac-toe or solve an algebra problem, it
is enough to tell him the rules and rely on his intelligence to apply
them. The ability of a program to learn from experience depends on
how its own behavior is represented within the machine. If the
program is to learn efficiently, then what a human would regard as
simple changes in behavior must correspond to small changes in the
way the behavior is represented. With present computer programs, in
order to change a program's behavior, one must understand the program
thoroughly and make changes in a number of places. It is as though
education were accomplished by brain surgery. A major research goal
of AI is to develop programs with common sense, able to combine
instructions with their own knowledge. Making a computer that can
learn well therefore requires progress in knowledge and its
representation.

INDUSTRIAL ROBOTS. Robots that do strenuous or dangerous tasks are in
increasing industrial use. The earliest were programmed to repeat the
same sequence of actions, such as putting an object found in a fixed
location into a punch press or a furnace and removing it later. Even
such crude robots are more flexible than special machines built for
each job and scrapped when the job changes. Recent industrial robots
have programs whose actions depend on sensing the environment, and
some even use television cameras to locate objects. General purpose
assembly robots and languages for programming them are now being
developed at Stanford University, General Motors and elsewhere.
Another program drives a vehicle so as to avoid obstacles.

EXPERT PROGRAMS. Edward A. Feigenbaum and Joshua Lederberg at
Stanford University pioneered the development of programs that embody
the knowledge of an expert in some field. Such programs are developed
by interviewing experts and getting them to help improve further
versions of a program. DENDRAL is expert in determining the structure
of an organic compound from mass spectrograph observations, and MYCIN
helps a doctor diagnose bacterial infections of the blood,
recommending tests and treatment, and then further tests and
treatment based on the results of the first. MYCIN is intended only
to make suggestions; the doctor still must understand the reasons for
everything he does.

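Expert programs of this kind are commonly organized as collections of
if-then rules applied to the findings of a particular case. The
Python fragment below sketches that organization with made-up
content; it is in no sense MYCIN itself, and its rules are not
medical advice.

    # A toy rule-based consultant.  Each rule looks at the known
    # findings and may add a conclusion or a recommendation; conclusions
    # become findings that later rules can use.  Rules and findings are
    # invented illustrations only.

    rules = [
        (lambda f: "fever" in f and "positive culture" in f,
         "conclude: bacterial infection is likely"),
        (lambda f: "bacterial infection is likely" in f,
         "recommend: run a sensitivity test"),
    ]

    def consult(initial_findings):
        findings = set(initial_findings)
        advice = []
        changed = True
        while changed:                   # keep applying rules until nothing new
            changed = False
            for condition, action in rules:
                if condition(findings) and action not in advice:
                    advice.append(action)
                    findings.add(action.split(": ", 1)[1])
                    changed = True
        return advice

    print(consult(["fever", "positive culture"]))
    # ['conclude: bacterial infection is likely',
    #  'recommend: run a sensitivity test']
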
INFORMATION PROCESSING PSYCHOLOGY. Artificial intelligence and the
study of human intelligence are closely related. The information
processing approach to psychology is largely replacing behaviorism,
which was chiefly concerned with finding direct relations between
stimuli received by organisms and their responses. The information
processing approach, initiated by Allen Newell and Herbert Simon in
the 1950's, involves writing computer programs that match complex
problem-solving behavior. Unlike stimulus-response theories,
information processing theories do not postulate direct relations
between inputs and outputs, because the internal processes can be
very complex. Both artificial intelligence and information processing
psychology must determine what intellectual mechanisms are required
to solve different kinds of problems.

THE STUDY OF AI. Artificial intelligence is a young and difficult
branch of science. Studying it requires the ability to program a
computer, especially using programming languages such as LISP (list
processing), which is popular in AI research. Also important is the
study of the theory of the correctness of computer programs, and
complexity theory - the study of how much computing is required to
solve various kinds of problems. Some background in mathematical
logic is also necessary.

In AI, there is less to learn than in physics or mathematics in order
to reach the frontier of the subject. Much of what the student learns
is controversial; some of it will probably be shown wrong. Besides
connections with psychology, artificial intelligence needs facts from
and contributes to mathematical logic, linguistics, and the
physiology of the nervous system. Finally, AI studies many questions
that are also studied by philosophers from a different point of view.

The ultimate goal of AI is to understand intelligence well enough to
make computers more intelligent than people. Some scientists do not
think this possible, while others think they are close to success.
Most would agree that while present methods will lead to further
accomplishments, a breakthrough in giving programs a general view of
the world is required before human-level intelligence is achieved.

John McCarthy